Recurrent Support Vector Machines For Slot Tagging In Spoken Language Understanding
Authors
Abstract
We propose the recurrent support vector machine (RSVM) for slot tagging. The model combines a recurrent neural network (RNN), which extracts features from the input sequence, with a structured support vector machine, which provides a sequence-level discriminative objective function. RSVM therefore couples the sequence representation capability of an RNN with a sequence-level discriminative training criterion. We observe new state-of-the-art results on two benchmark datasets and one private dataset: RSVM obtains statistically significant relative average F1 score improvements of 4% on the ATIS dataset and 2% on the Chunking dataset, and improves the F1 score on seven of the eight domains in the Cortana live log dataset. Experiments also show that RSVM significantly speeds up model training by skipping weight updates for non-support-vector training samples, compared with training an RNN with CRF or minimum cross-entropy objectives.
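To make the setup concrete, below is a minimal sketch of a recurrent structured-SVM slot tagger, assuming PyTorch: an LSTM produces per-token tag scores, a transition matrix scores tag bigrams, and the sequence-level structured hinge loss is computed with loss-augmented Viterbi decoding. Sequences that already satisfy the margin incur zero loss, so their update can be skipped. All names here (RecurrentStructuredSVM, train_step, the dimensions) are illustrative rather than the authors' implementation, and the skip is shown per batch for simplicity, whereas the paper describes skipping non-support-vector training samples.

```python
# Hedged sketch of an RNN + structured hinge (structured SVM) slot tagger.
import torch
import torch.nn as nn


class RecurrentStructuredSVM(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.emit = nn.Linear(hidden_dim, num_tags)                  # per-token tag scores
        self.trans = nn.Parameter(torch.zeros(num_tags, num_tags))   # tag-transition scores

    def emissions(self, tokens):
        h, _ = self.rnn(self.embed(tokens))                          # (B, T, H)
        return self.emit(h)                                          # (B, T, num_tags)

    def sequence_score(self, emissions, tags):
        # Score of the gold tag sequence: emissions plus transitions.
        score = emissions.gather(-1, tags.unsqueeze(-1)).squeeze(-1).sum(-1)
        return score + self.trans[tags[:, :-1], tags[:, 1:]].sum(-1)  # (B,)

    def loss_augmented_viterbi(self, emissions, gold_tags):
        # max_y [ score(y) + Hamming(y, gold) ] via dynamic programming.
        B, T, K = emissions.shape
        cost = 1.0 - nn.functional.one_hot(gold_tags, K).float()     # Hamming margin
        alpha = emissions[:, 0] + cost[:, 0]                          # (B, K)
        for t in range(1, T):
            # alpha[b, j] = max_i alpha[b, i] + trans[i, j], then add emission and cost.
            alpha = (alpha.unsqueeze(2) + self.trans).max(dim=1).values
            alpha = alpha + emissions[:, t] + cost[:, t]
        return alpha.max(dim=-1).values                               # (B,)

    def hinge_loss(self, tokens, gold_tags):
        emissions = self.emissions(tokens)
        violation = (self.loss_augmented_viterbi(emissions, gold_tags)
                     - self.sequence_score(emissions, gold_tags))
        return torch.clamp(violation, min=0.0)


def train_step(model, optimizer, tokens, gold_tags):
    loss = model.hinge_loss(tokens, gold_tags).mean()
    if loss.item() > 0:        # zero-loss (non-support-vector) batches need no update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()
```

The zero-loss check is where the reported training speedup comes from: samples whose gold sequence already beats every competitor by the required margin contribute no gradient, so the backward pass for them can be omitted entirely.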
Similar Papers
Spoken language classification using hybrid classifier combination
In this paper we describe an approach for spoken language analysis for helpdesk call routing using a combination of simple recurrent networks and support vector machines. In particular we examine this approach for its potential in a difficult spoken language classification task based on recorded operator assistance telephone utterances. We explore simple recurrent networks and support vector ma...
RNN-based labeled data generation for spoken language understanding
In spoken language understanding, getting manually labeled data such as domain, intent and slot labels is usually required for training classifiers. Starting with some manually labeled data, we propose a data generation approach to augment the training set with synthetic data sampled from a joint distribution between an input query and an output label. We propose using a recurrent neural networ...
Deep contextual language understanding in spoken dialogue systems
We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components...
Part-of-speech tagging and chunk parsing of spoken Dutch using support vector machines
This paper describes the design and evaluation of a part-ofspeech tagger and chunk parser for spoken Dutch, using support vector machines. The data in the Corpus Gesproken Nederlands is split into smaller sub problems to obtain reasonable training and tagging speed using various kernel types. The tagger combines good accuracy with reasonable tagging speed. The chunk parser shows good accuracy, ...
Comparing Support Vector Machines, Recurrent Networks, and Finite State Transducers for Classifying Spoken Utterances
This paper describes new experiments for the classification of recorded operator assistance telephone utterances. The experimental work focused on three techniques: support vector machines (SVM), simple recurrent networks (SRN) and finite-state transducers (FST) using a large, unique telecommunication corpus of spontaneous spoken language. A comparison is made of the performance of these classi...